
    Deuterium retention in tungsten and tungsten-tantalum alloys exposed to high-flux deuterium plasmas

    A direct comparison of deuterium retention in samples of tungsten and two grades of tungsten-tantalum alloys (W-1% Ta and W-5% Ta), exposed to deuterium plasmas (ion flux ~10^24 m^-2 s^-1, ion energy at the biased target ~50 eV) at the plasma generator Pilot-PSI, was performed using thermal desorption spectroscopy (TDS). No systematic difference in total retention between tungsten and tungsten-tantalum was identified: the measured retention value for each grade deviated by no more than 24% from the value averaged over the three grades exposed to the same conditions. No additional desorption peaks appeared in the TDS spectra of the W-Ta samples compared with the W target, indicating that alloying W with Ta introduces no additional kinds of traps. In the course of the experiment the same samples were exposed to the same plasma conditions several times, and samples with a history of prior exposures showed an increase in deuterium retention of up to 130% under the investigated conditions compared with samples that had not been exposed before. We consider this evidence that exposure of these materials to ions with energy below the displacement threshold generates additional traps for deuterium. The positions of the release peaks caused by these traps are similar for W and W-Ta, which indicates that the corresponding traps are of the same kind.

    Error, reproducibility and sensitivity: a pipeline for data processing of Agilent oligonucleotide expression arrays

    Background: Expression microarrays are increasingly used to obtain large-scale transcriptomic information on a wide range of biological samples. Nevertheless, there is still much debate on the best ways to process data, to design experiments and to analyse the output. Furthermore, many of the more sophisticated mathematical approaches to data analysis in the literature remain inaccessible to much of the biological research community. In this study we examine ways of extracting and analysing a large data set obtained using the Agilent long-oligonucleotide transcriptomics platform, applied to a set of human macrophage and dendritic cell samples. Results: We describe and validate a series of data extraction, transformation and normalisation steps which are implemented via a new R function. Analysis of replicate normalised reference data demonstrates that intra-array variability is small (only around 2% of the mean log signal), while inter-array variability from replicate array measurements has a standard deviation (SD) of around 0.5 log2 units (~6% of the mean). The common practice of working with ratios of Cy5/Cy3 signal offers little further improvement in terms of reducing error. Comparison to expression data obtained using Arabidopsis samples demonstrates that the large number of genes in each sample showing a low level of transcription reflects the real complexity of the cellular transcriptome. Multidimensional scaling is used to show that the processed data identify an underlying structure which reflects some of the key biological variables defining the data set. This structure is robust, allowing reliable comparison of samples collected over a number of years and by a variety of operators. Conclusions: This study outlines a robust and easily implemented pipeline for extracting, transforming, normalising and visualising transcriptomic array data from the Agilent expression platform. The analysis is used to obtain quantitative estimates of the SD arising from experimental (non-biological) intra- and inter-array variability, and a lower threshold for determining whether an individual gene is expressed. The study provides a reliable basis for further, more extensive studies of the systems biology of eukaryotic cells.
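As a rough illustration of the kind of variability estimate described above (this is a minimal sketch on simulated data, not the authors' R function, and the noise model is an assumption for illustration only), the inter-array SD can be computed from replicate log2 signals like this:

```python
import numpy as np

# Simulate replicate arrays: rows = genes, columns = 4 replicate hybridisations,
# values = normalised log2 signal. The 0.5 log2-unit noise level mirrors the
# figure quoted in the abstract; all numbers here are synthetic.
rng = np.random.default_rng(0)
true_signal = rng.uniform(6, 14, size=(1000, 1))      # per-gene "true" log2 expression
noise_sd = 0.5                                         # assumed inter-array noise
replicates = true_signal + rng.normal(0, noise_sd, size=(1000, 4))

# Inter-array variability: SD of each gene across replicate arrays, averaged over genes.
inter_sd = float(replicates.std(axis=1, ddof=1).mean())
print(f"inter-array SD ~ {inter_sd:.2f} log2 units")
```

With a real data set, `replicates` would hold normalised signals from repeated hybridisations of the same reference sample, and the same per-gene SD gives the experimental (non-biological) error estimate.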

    Innovation and Access to Medicines for Neglected Populations: Could a Treaty Address a Broken Pharmaceutical R&D System?

    As part of a cluster of articles leading up to the 2012 World Health Report and critically reflecting on the theme of “no health without research,” Suerie Moon and colleagues argue for a global health R&D treaty to improve innovation in new medicines and to strengthen affordability, sustainable financing, efficiency of innovation, and equitable health-centred governance.

    Mapping polyclonal antibody responses to bacterial infection using next generation phage display

    Mapping polyclonal antibody responses to infectious diseases to identify individual epitopes has the potential to underpin the development of novel serological assays and vaccines. Here, phage-peptide library panning coupled with screening by next-generation sequencing was used to map antibody responses to bacterial infections. In the first instance, pigs experimentally infected with Salmonella enterica serovar Typhimurium were investigated. IgG samples from twelve infected pigs were probed in parallel and phage binding was compared with that of equivalent IgG taken from the animals before infection. Seventy-seven peptide mimotopes were enriched specifically against sera from multiple infected animals. Twenty-seven of these peptides were tested in ELISA and twenty-two were highly discriminatory for sera taken from pigs post-infection (P < 0.05), indicating that these peptides mimic epitopes from the bacteria. To test the methodology further, it was applied to differentiate antibody responses in poultry to infections with distinct serovars of Salmonella enterica. Twenty-seven peptides were identified as being enriched specifically against IgY from multiple animals infected with S. Enteritidis compared with those infected with S. Hadar. Nine of fifteen peptides tested in ELISA were highly discriminatory for IgY following S. Enteritidis infection (P < 0.05) compared with infections with S. Hadar or S. Typhimurium.
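The enrichment step described above can be sketched in miniature: given sequencing read counts for each displayed peptide after panning against post-infection versus pre-infection antibody, score the fold-enrichment and keep peptides above a cutoff. The peptide sequences, counts, and the 10-fold cutoff below are all hypothetical, chosen only to illustrate the comparison (the paper's actual statistics are not reproduced here):

```python
from collections import Counter

# Hypothetical NGS read counts of phage-displayed peptides after panning
# against post-infection vs pre-infection IgG (illustrative values).
post = Counter({"HNWTPLS": 950, "QARYDDF": 480, "KLMNPQR": 30})
pre  = Counter({"HNWTPLS": 20,  "QARYDDF": 15,  "KLMNPQR": 25})

def enrichment(peptide, pseudo=1.0):
    """Fold-enrichment of a peptide in post- vs pre-infection panning,
    with a pseudocount to keep the ratio defined for unseen peptides."""
    return (post[peptide] + pseudo) / (pre[peptide] + pseudo)

# Peptides enriched in post-infection panning (here: >= 10-fold, an assumed cutoff)
# would be the candidate mimotopes carried forward into ELISA testing.
hits = [p for p in post if enrichment(p) >= 10]
```

In practice, a peptide would also need to be enriched across multiple infected animals, as the abstract describes, before being treated as a candidate mimotope.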

    Next-generation sequencing of vertebrate experimental organisms

    Next-generation sequencing technologies are revolutionizing biology by allowing for genome-wide transcription factor binding-site profiling, transcriptome sequencing, and, more recently, whole-genome resequencing. While it is currently not possible to generate complete de novo assemblies of higher-vertebrate genomes using next-generation sequencing, improvements in sequence read lengths and throughput, coupled with new assembly algorithms for large data sets, will soon make this a reality. These developments will in turn spawn a revolution in how genomic data are used to understand genetics and how model organisms are used for disease gene discovery. This review provides an overview of the current next-generation sequencing platforms and the newest computational tools for the analysis of next-generation sequencing data. We also describe how next-generation sequencing may be applied in the context of vertebrate model organism genetics.

    Targeted Skipping of Human Dystrophin Exons in Transgenic Mouse Model Systemically for Antisense Drug Development

    Antisense therapy has recently been demonstrated to have great potential for targeted exon skipping and restoration of dystrophin production in cultured muscle cells and in muscles of Duchenne muscular dystrophy (DMD) patients. The therapeutic value of exon skipping critically depends on the efficacy of the drugs, antisense oligomers (AOs). However, no animal model has been established to test AOs targeting human dystrophin exons systemically in vivo. In this study, we applied Vivo-Morpholino to the hDMD/mdx mouse, a transgenic model carrying the full-length human dystrophin gene on an mdx background, and achieved for the first time more than 70% efficiency of targeted human dystrophin exon skipping systemically in vivo. We also established a GFP-reporter myoblast culture to screen AOs targeting human dystrophin exon 50. Antisense efficiency for most AOs is consistent between the reporter cells, human myoblasts and the hDMD/mdx mice in vivo; however, variation in efficiency was also clearly observed. A combination of in vitro cell culture and Vivo-Morpholino-based systemic evaluation in hDMD/mdx mice may therefore represent a prudent approach for selecting AO drugs and meeting regulatory requirements.

    Protocol Dependence of Sequencing-Based Gene Expression Measurements

    RNA-Seq provides unparalleled levels of information about the transcriptome, including precise expression levels over a wide dynamic range. It is essential to understand how technical variation impacts the quality and interpretability of results, how potential errors could be introduced by the protocol, how the source of RNA affects transcript detection, and how all of these variations can impact the conclusions drawn. Multiple human RNA samples were used to assess RNA fragmentation, RNA fractionation, cDNA synthesis, and single versus multiple tag counting. Although protocols employing polyA RNA selection generate the highest number of non-ribosomal reads and the most precise measurements for coding transcripts, such protocols were found to detect only a fraction of the non-ribosomal RNA in human cells. PolyA selection excludes thousands of annotated and even more unannotated transcripts, resulting in an incomplete view of the transcriptome. Ribosomal-depleted RNA provides a more cost-effective method for generating complete transcriptome coverage. Expression measurements using single-tag counting provided advantages for assessing gene expression and for detecting short RNAs relative to multi-read protocols. Detection of short RNAs was also hampered by RNA fragmentation. This work will thus help researchers choose from among a range of options when analyzing gene expression, each with its own advantages and disadvantages.
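The protocol comparison above boils down to a simple bookkeeping question per library: how many reads are usable (non-ribosomal), and of what kind. A minimal sketch, with entirely made-up tallies (the real figures are in the paper, not here), of how one might compare protocols on that axis:

```python
# Hypothetical per-protocol read tallies (illustrative numbers only):
# trade off usable-read yield against breadth of transcript detection.
protocols = {
    "polyA_selected": {"total": 10_000_000, "ribosomal": 300_000,   "transcripts_detected": 14_000},
    "ribo_depleted":  {"total": 10_000_000, "ribosomal": 2_500_000, "transcripts_detected": 21_000},
}

def nonribo_fraction(stats):
    """Fraction of reads in a library that are non-ribosomal (usable)."""
    return (stats["total"] - stats["ribosomal"]) / stats["total"]

for name, stats in protocols.items():
    print(f"{name}: {nonribo_fraction(stats):.0%} non-ribosomal, "
          f"{stats['transcripts_detected']} transcripts detected")
```

The point mirrored from the abstract: the polyA library wins on usable-read fraction, while the ribo-depleted library detects more of the transcriptome, so the right choice depends on whether precision for coding genes or completeness of coverage matters more.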